2020 Korea Computer Congress (KCC 2020)
Korean Title |
An Incentive Design to Perform Federated Learning |
English Title |
An Incentive Design to Perform Federated Learning |
Author(s) |
Shashi Raj Pandey
Sabah Suhail
Yan Kyaw Tun
Madyan Alsenwi
Choong Seon Hong
|
Citation |
Vol. 47, No. 01, pp. 0815-0817 (July 2020) |
Korean Abstract |
|
English Abstract |
Federated learning is a distributed model training approach with privacy-preserving benefits. Under this machine learning approach, a number of devices collaborate to train a learning model of interest without sharing their raw datasets. Instead, they share their local model parameters, and an aggregator, such as a central multi-access edge computing (MEC) server, aggregates these local parameters to build a global model. The global model is then broadcast back to the devices for the next round of local training, and the iterative process continues. A key challenge in doing so, however, is ensuring that a sufficient number of active devices join the training process. In this work, we propose an incentive-based methodology, namely Crowd-FL, to ensure active participation in federated learning. In particular, we characterize the impact of participation on the achievable level of local accuracy and evaluate the learning performance against a random device selection approach as the baseline. To that end, we formulate a utility maximization problem that incorporates both participation and the model training paradigm. Simulation results show that our proposed methodology not only improves the average model accuracy but also minimizes the number of global communication rounds. |
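The train-aggregate-broadcast loop described in the abstract can be sketched as below. This is a generic FedAvg-style illustration of the round structure only, not the paper's Crowd-FL incentive mechanism; the local solver (a few gradient steps of linear regression), the learning rate, and the per-device dataset sizes are all illustrative assumptions.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One device's local training: a few gradient-descent steps on a
    least-squares objective, starting from the broadcast global model."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def aggregate(local_ws, n_samples):
    """Server-side step: sample-size-weighted average of the devices'
    local parameters (FedAvg-style), producing the new global model."""
    weights = np.array(n_samples) / sum(n_samples)
    return sum(w_i * a for w_i, a in zip(local_ws, weights))

# Synthetic heterogeneous devices (sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    devices.append((X, y))

# Global communication rounds: broadcast, local training, aggregation.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = aggregate(local_ws, [len(y) for _, y in devices])

print(global_w)  # approaches true_w without any raw data leaving a device
```

Only the parameter vectors cross the network; each raw dataset stays on its device, which is the privacy-preserving property the abstract refers to.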
Keyword(s) |
federated learning
incentive mechanism
crowdsourcing
utility maximization
|